On Nonasymptotic Confidence Intervals for Treatment Effects in Randomized Experiments

Sandoval, Ricardo J., Balakrishnan, Sivaraman, Feller, Avi, Jordan, Michael I., Waudby-Smith, Ian

arXiv.org Machine Learning

We study nonasymptotic (finite-sample) confidence intervals for treatment effects in randomized experiments. In the existing literature, nonasymptotic confidence intervals tend to be looser than the corresponding central-limit-theorem-based confidence intervals, with an effective sample size that is worse by a factor depending on the square root of the propensity score. We show that this performance gap can be closed by designing nonasymptotic confidence intervals that have the same effective sample size as their asymptotic counterparts. Our approach systematically exploits negative dependence, variance adaptivity, or both. We also show that the nonasymptotic rates we achieve are unimprovable in an information-theoretic sense.
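As a concrete illustration of the propensity-dependent width inflation this abstract describes, here is a minimal sketch (not the paper's method; all constants and the data-generating setup are illustrative assumptions) of a classical nonasymptotic interval: a Hoeffding-based confidence interval for the average treatment effect, built on Horvitz-Thompson pseudo-outcomes under Bernoulli(p) assignment, with outcomes assumed bounded in [0, 1].

```python
import math
import random

def ht_hoeffding_ci(y, t, p, alpha=0.05):
    """Nonasymptotic (Hoeffding) CI for the ATE under Bernoulli(p)
    assignment, outcomes assumed bounded in [0, 1].

    Uses the Horvitz-Thompson pseudo-outcome
        Z_i = Y_i * T_i / p - Y_i * (1 - T_i) / (1 - p),
    which lies in [-1/(1-p), 1/p]. Hoeffding's inequality then gives a
    finite-sample CI whose half-width scales with the range
    1/p + 1/(1-p) = 1/(p(1-p)): the propensity-dependent inflation
    the abstract refers to.
    """
    n = len(y)
    z = [yi * ti / p - yi * (1 - ti) / (1 - p) for yi, ti in zip(y, t)]
    est = sum(z) / n  # unbiased Horvitz-Thompson estimate of the ATE
    z_range = 1.0 / p + 1.0 / (1.0 - p)  # range of each Z_i
    half = z_range * math.sqrt(math.log(2.0 / alpha) / (2.0 * n))
    return est - half, est + half

random.seed(0)
n, p = 2000, 0.3
t = [1 if random.random() < p else 0 for _ in range(n)]
# Simulated experiment (hypothetical): true ATE = 0.2, treated mean 0.7,
# control mean 0.5, outcomes clipped to [0, 1].
y = [min(1.0, max(0.0, random.gauss(0.7 if ti else 0.5, 0.1))) for ti in t]
lo, hi = ht_hoeffding_ci(y, t, p)
print(f"95% nonasymptotic CI for ATE: [{lo:.3f}, {hi:.3f}]")
```

Note that the half-width carries the factor 1/(p(1-p)), whereas a CLT-based interval scales with the standard deviation of the pseudo-outcomes; this is the kind of gap the paper shows can be closed.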


Identification and Estimation under Multiple Versions of Treatment: Mixture-of-Experts Approach

Yoshikawa, Kohei, Kawano, Shuichi

arXiv.org Machine Learning

The Stable Unit Treatment Value Assumption (SUTVA) in causal inference includes the condition that there are no multiple versions of treatment. Although the implementation of treatment cannot be controlled in observational studies, multiple versions of the treatment may nonetheless exist. It has been pointed out that ignoring such multiple versions can lead to biased estimates of causal effects, but a causal inference framework that explicitly handles the unbiased identification and estimation of version-specific causal effects has not yet been fully developed, making it difficult to obtain a deeper understanding of the mechanisms of complex treatments. In this paper, we introduce the Mixture-of-Experts framework into causal inference and develop a methodology for estimating the causal effects of latent versions. This approach enables explicit estimation of version-specific causal effects even when the versions are not observed. Numerical experiments demonstrate the effectiveness of the proposed method.

Keywords: causal inference; multiple versions of treatment; compound treatments; mixture-of-experts; EM algorithm
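As a rough illustration of the latent-version idea behind the keywords, the following toy sketch (not the authors' estimator; the mixture settings and the control mean are hypothetical) runs EM on a two-component Gaussian mixture of treated outcomes, treating the components as latent "versions" of a single observed treatment, and reports version-specific effects against a control mean.

```python
import math
import random

def em_two_versions(x, sigma=1.0, iters=200):
    """EM for a two-component Gaussian mixture with known common sigma:
    a toy stand-in for latent 'versions' of one observed treatment.
    Returns (mixing weight of version 1, mean of version 1, mean of version 2).
    """
    # Crude initialization at the sample quartiles (a hypothetical choice).
    xs = sorted(x)
    mu1, mu2 = xs[len(xs) // 4], xs[3 * len(xs) // 4]
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of version 1 for each observation.
        r = []
        for xi in x:
            a = w * math.exp(-((xi - mu1) ** 2) / (2 * sigma ** 2))
            b = (1 - w) * math.exp(-((xi - mu2) ** 2) / (2 * sigma ** 2))
            r.append(a / (a + b))
        # M-step: responsibility-weighted means and mixing weight.
        s = sum(r)
        w = s / len(x)
        mu1 = sum(ri * xi for ri, xi in zip(r, x)) / s
        mu2 = sum((1 - ri) * xi for ri, xi in zip(r, x)) / (len(x) - s)
    return w, mu1, mu2

random.seed(1)
# Treated outcomes: 40% version A (mean 1.0), 60% version B (mean 4.0).
x = [random.gauss(1.0 if random.random() < 0.4 else 4.0, 1.0)
     for _ in range(1500)]
w, muA, muB = em_two_versions(x)
control_mean = 0.0  # hypothetical control-arm mean
print(f"version-specific effects: {muA - control_mean:.2f}, "
      f"{muB - control_mean:.2f}")
```

Averaging the two recovered effects by the mixing weight gives back the single confounded "overall" effect that ignoring versions would produce, which is why separating them matters.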


DiffPO: A causal diffusion model for learning distributions of potential outcomes

Neural Information Processing Systems

Predicting potential outcomes of interventions from observational data is crucial for decision-making in medicine, but the task is challenging due to the fundamental problem of causal inference. Existing methods are largely limited to point estimates of potential outcomes with no uncertainty quantification; thus, the full information about the distributions of potential outcomes is typically ignored. In this paper, we propose a novel causal diffusion model called DiffPO, which is carefully designed for reliable inferences in medicine by learning the distribution of potential outcomes. In our DiffPO, we leverage a tailored conditional denoising diffusion model to learn complex distributions, where we address the selection bias through a novel orthogonal diffusion loss. Another strength of our DiffPO method is that it is highly flexible (e.g., it can also be used to estimate different causal quantities such as CATE). Across a wide range of experiments, we show that our method achieves state-of-the-art performance.
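To illustrate the distributional (rather than point) estimand, here is a toy sketch in which plain empirical samples stand in for draws from a learned conditional diffusion model. The actual DiffPO learns these distributions from observational data with a selection-bias correction, which this sketch does not attempt; the distributions below are invented for illustration.

```python
import random
import statistics

random.seed(2)
# Toy "learned" distributions of the potential outcomes for one covariate
# profile: empirical samples standing in for draws from a conditional
# generative model of Y(1) and Y(0).
y1 = [random.gauss(2.0, 1.0) for _ in range(5000)]  # draws of Y(1)
y0 = [random.gauss(1.0, 0.5) for _ in range(5000)]  # draws of Y(0)

# A point summary (CATE for this profile) is just a difference of means...
cate = statistics.fmean(y1) - statistics.fmean(y0)
# ...but the full distribution also answers questions a point estimate
# cannot, e.g. tail risk under treatment:
q10_treated = statistics.quantiles(y1, n=10)[0]  # 10th percentile of Y(1)
print(f"CATE ~ {cate:.2f}, 10th percentile of Y(1) ~ {q10_treated:.2f}")
```

The point is that once full potential-outcome distributions are available, quantities like quantiles or exceedance probabilities come for free, whereas a point estimator of the CATE discards them.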